Is It Harmful When Advisors Only Pretend to Be Honest?
Wang, Dongxia (Nanyang Technological University) | Muller, Tim (Nanyang Technological University) | Zhang, Jie (Nanyang Technological University) | Liu, Yang (Nanyang Technological University)
In trust systems, unfair rating attacks, where advisors provide ratings dishonestly, degrade the accuracy of trust evaluation. A secure trust system should function properly under all possible unfair rating attacks, including dynamic attacks. In the literature, camouflage attacks are the most studied dynamic attacks, but an open question is whether more harmful dynamic attacks exist. We propose random processes to model and measure dynamic attacks. The harm of an attack depends on a user's ability to learn from the past, so we consider three types of users: blind users, aware users, and general users. We found that for all three types, camouflage attacks are far from the most harmful. We identified the most harmful attacks, and found that even under these attacks the ratings may still be useful to users.
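The abstract's measure of an attack's harm via a user's ability to learn can be illustrated with a small sketch. This is our own toy model, not the authors' formalism: a seller's true behaviour X is a Bernoulli variable, an advisor's binary rating R is flipped with some probability, and the damage shows up as lost mutual information I(X; R). Notably, a deterministic liar (flip probability 1) destroys no information for a user who is aware of the lying, which hints at why the distinction between blind and aware users matters.

```python
import math

def mutual_information(p_good: float, p_flip: float) -> float:
    """I(X; R) in bits between a seller's true behaviour X
    (good with probability p_good) and a single binary rating R
    that an attacker flips with probability p_flip."""
    px = [1 - p_good, p_good]
    joint = {}
    for x in (0, 1):
        for r in (0, 1):
            pr_given_x = (1 - p_flip) if r == x else p_flip
            joint[(x, r)] = px[x] * pr_given_x
    # Marginal distribution of the rating R
    pr = [joint[(0, r)] + joint[(1, r)] for r in (0, 1)]
    mi = 0.0
    for (x, r), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * pr[r]))
    return mi

print(mutual_information(0.5, 0.0))  # honest reporting: 1.0 bit
print(mutual_information(0.5, 0.5))  # uniformly random lying: 0.0 bits
print(mutual_information(0.5, 1.0))  # always lying: still 1.0 bit to an aware user
```

In this toy view, the most harmful static attack is the one that drives I(X; R) to zero, not the one that lies most often.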
Quantifying Robustness of Trust Systems against Collusive Unfair Rating Attacks Using Information Theory
Wang, Dongxia (Nanyang Technological University) | Muller, Tim (Nanyang Technological University) | Zhang, Jie (Nanyang Technological University) | Liu, Yang (Nanyang Technological University)
Unfair rating attacks happen in existing trust and reputation systems, lowering the quality of those systems. There exists a formal model, based on information theory, that measures the maximum impact of independent attackers [Wang et al., 2015]. We improve on these results in two ways: (1) we alter the methodology to be able to reason about colluding attackers as well, and (2) we extend the method to measure the strength of any attack (rather than just the strongest attack). Using (1), we identify the strongest collusion attacks, helping to construct robust trust systems. Using (2), we quantify the strength of (classes of) attacks found in the literature. Based on this, we help overcome a shortcoming of current research into collusion resistance: specific (types of) attacks are used in simulations, preventing direct comparisons between analyses of different systems.
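To make the contrast between independent and colluding attackers concrete, here is a hedged sketch in the same information-theoretic spirit (our own toy model, with assumed parameters, not the paper's actual measure): k advisors rate a seller, and we compare how much information the k ratings carry when each attacker flips its rating independently versus when perfect colluders flip all ratings together.

```python
import itertools
import math

def mi_k_ratings(p_good: float, k: int, p_flip: float, colluding: bool) -> float:
    """I(X; R_1..R_k) in bits, where each advisor's rating of seller
    behaviour X is flipped with probability p_flip, either independently
    or jointly (colluders flip all ratings together)."""
    px = {0: 1 - p_good, 1: p_good}
    joint = {}
    for x in (0, 1):
        for rs in itertools.product((0, 1), repeat=k):
            if colluding:
                # Perfect collusion: all ratings equal x, or all equal 1 - x
                if all(r == x for r in rs):
                    p = 1 - p_flip
                elif all(r == 1 - x for r in rs):
                    p = p_flip
                else:
                    p = 0.0
            else:
                # Independent attackers: each rating flips on its own
                p = 1.0
                for r in rs:
                    p *= (1 - p_flip) if r == x else p_flip
            joint[(x, rs)] = px[x] * p
    prs = {}
    for (x, rs), p in joint.items():
        prs[rs] = prs.get(rs, 0.0) + p
    mi = 0.0
    for (x, rs), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * prs[rs]))
    return mi

# Three independent flippers leak more information than three perfect
# colluders, whose k ratings are worth only a single rating:
print(mi_k_ratings(0.5, 3, 0.2, colluding=False))
print(mi_k_ratings(0.5, 3, 0.2, colluding=True))
```

In this sketch, collusion is strictly stronger: coordinated ratings collapse k noisy observations into one, which is the kind of worst case the paper's methodology is designed to capture.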
Quantifying and Improving the Robustness of Trust Systems
Wang, Dongxia (Nanyang Technological University)
Trust systems are widely used to facilitate interactions among agents based on trust evaluation. These systems may have robustness issues; that is, they are vulnerable to various attacks. Designers of trust systems propose methods to defend against these attacks, but they typically verify the robustness of their defense mechanisms (or trust models) only under specific attacks. This raises two problems: first, the robustness of the models is not guaranteed, since not all attacks are considered; second, the comparison between two trust models depends on the choice of specific attacks, introducing bias. We propose to quantify the strength of attacks, and to quantify the robustness of a trust system by the strength of the attacks it can resist. Our quantification is based on information theory, and provides designers of trust systems with a fair measurement of robustness.
Safeguarding E-Commerce against Advisor Cheating Behaviors: Towards More Robust Trust Models for Handling Unfair Ratings
In electronic marketplaces, after each transaction, buyers rate the products provided by the sellers. To identify the most trustworthy sellers to transact with, buyers rely on trust models that leverage these ratings to evaluate the reputation of sellers. Although designers have claimed high effectiveness for various trust models in handling unfair ratings, it has recently been argued that these models are vulnerable to more intelligent attacks, and there is an urgent need to evaluate the robustness of existing trust models more comprehensively. In this work, we classify the existing trust models into two broad categories and propose an extendable e-marketplace testbed to comprehensively evaluate their robustness against different unfair rating attacks. Beyond showing that the robustness of the existing trust models in handling unfair ratings falls far short of what was claimed, we propose and validate a novel combination mechanism for the existing trust models, Discount-then-Filter, which notably enhances their robustness against the investigated attacks.
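The abstract names the Discount-then-Filter combination but not its internals; the sketch below is our own illustrative guess at the shape of such a mechanism (the weighting scheme, deviation threshold, and aggregation are all assumptions, not the paper's design): first discount each rating by the advisor's trustworthiness, then filter out ratings that deviate sharply from the discounted consensus.

```python
def discount_then_filter(ratings: dict, advisor_trust: dict, threshold: float = 0.3):
    """Illustrative Discount-then-Filter sketch (parameters are assumptions).
    ratings: advisor -> rating in [0, 1]; advisor_trust: advisor -> trust in [0, 1].
    Returns a reputation estimate for the rated seller."""
    # Step 1 (Discount): trust-weighted consensus over all ratings
    total_w = sum(advisor_trust[a] for a in ratings)
    if total_w == 0:
        return None
    consensus = sum(advisor_trust[a] * r for a, r in ratings.items()) / total_w
    # Step 2 (Filter): drop ratings far from the consensus, then re-aggregate
    kept = {a: r for a, r in ratings.items() if abs(r - consensus) <= threshold}
    if not kept:
        return consensus
    w = sum(advisor_trust[a] for a in kept)
    return sum(advisor_trust[a] * r for a, r in kept.items()) / w

# Two honest advisors and one bad-mouthing advisor with low trust:
ratings = {"alice": 0.9, "bob": 0.8, "mallory": 0.0}
trust = {"alice": 0.9, "bob": 0.8, "mallory": 0.2}
print(round(discount_then_filter(ratings, trust), 2))  # -> 0.85
```

The order matters in this sketch: discounting first shrinks the attacker's pull on the consensus, so the filter step has a cleaner reference point for spotting the unfair rating.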